# apollo-kotlin
g
Hey team, are there any guidelines for schema location in a multi-repository setup? I have a base repo with some shared types, and multiple repos for feature modules. Feature modules will depend on the shared types, but they will also independently download the schema and generate the types they are using. If I publish metadata and depend on it in my featureModule I’m getting an error because schemaFiles is defined upstream, and this makes sense since only 1 module can have the schema.
m
Yep, as you found out, there can be only one schema module. If it's in a separate repo, you cannot use auto detection of used types so you need to generate all required types in your schema module with
alwaysGenerateTypesMatching.set(listOf(".*"))
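For context, in the schema module’s build.gradle.kts this would sit roughly as in the sketch below (3.x DSL; the package name is a placeholder and your setup may differ):
```kotlin
// Sketch of the schema module configuration (apollo-kotlin 3.x DSL).
// "com.schema" is a placeholder package name.
apollo {
  packageName.set("com.schema")
  // Publish codegen metadata so modules in other repos can reuse the generated types.
  generateApolloMetadata.set(true)
  // Used-type detection doesn't work across repos, so generate every schema type.
  alwaysGenerateTypesMatching.set(listOf(".*"))
}
```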
g
Thanks Martin. I did this and I’m generating all types, but I believe I also need to publish metadata so feature modules can depend on it using
apolloMetadata("com…")
syntax. I saw this and ran the task but
-apollo
doesn’t get published even if the task succeeds. Am I on the right path or do you have any thoughts?
I figured out what’s happening here. In your sample project you have
group = "com.jvm"
at the root level so all publications are published to
com.jvm
. In my project I am individually setting the groupId for each publication like below:
Copy code
create<MavenPublication>("schema") {
    groupId = "com.test"
    artifactId = "schema"
}
With this setup, my models are published to
com.test
, but Apollo metadata defaults groupId to
root.project.name
m
Any chance you can change the consumer projects to the new maven groupId? (
root.project.name
)?
g
I can also do that. Instead, I just set a groupId at the root level and it works
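For illustration, one way to do that is the sketch below; the group value is just an example, and the sample project may set it differently (e.g. only on the root project):
```kotlin
// Root build.gradle.kts: apply a single group to every project so all publications
// (including the Apollo metadata one) share it. Example value only.
allprojects {
  group = "com.test"
}
```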
m
Ah nice 👍
I didn't realize it was working now, cool 🙂
Also heads up if you're looking at
main
, the setup is slightly different than in the
3.x
branch, you have to create the publication yourself:
Copy code
create<MavenPublication>("apollo") {
      from(components["apollo"])
      artifactId = "jvm-producer-apollo"
    }
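Presumably that snippet lives inside the usual maven-publish blocks, roughly like this sketch (the artifactId comes from the sample above; everything else is an assumption):
```kotlin
// Sketch: exposing the "apollo" software component (provided by the Apollo plugin on main)
// through a regular maven-publish publication. Requires the maven-publish plugin.
publishing {
  publications {
    create<MavenPublication>("apollo") {
      from(components["apollo"])
      artifactId = "jvm-producer-apollo"
    }
  }
}
```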
At some point we might be able to put the metadata in the main jar, like the Kotlin compiler is doing, that would simplify the setup
But it feels like a high lift.
g
Ok, good to know. I was looking at the 3.x branch. Yes, that would be cool actually. This is a spike for now, seeing how it would work if we kept the schema in a separate repository and published generated code to consumer feature modules.
m
I'd be curious about your feedback on this setup. There is a lot of code to avoid generating too many types in the schema module, to save on build time for large schemas. But with your setup the schema shouldn't be recompiled very often, so it might be a net positive (even with alwaysGenerateTypesMatching.set(listOf(".*"))).
g
I checked the numbers. Currently we are generating half of the types, so this will just double the size. Proguard removes unused classes anyway, so this shouldn't have any huge impact in my opinion. I will share my findings with you if we decide to go with the trial.
👍 1
💙 1
I tested out this approach and it works, but I’m also checking if I can make this work by putting the bare minimum of shared types in the top-level module, instead of generating everything, so the apps only generate what they need. I have a shared Context type and I want the top-level module/repo to only contain the schema + the generated model for the Context type. The rest should be generated where they are used. This doesn’t work because the downstream modules are using the packageName of the metadata module instead of the packageName I set in the module. So, shouldn’t it be using the packageName of the feature module for types as well? Ex:
Copy code
// schema.module with [alwaysGenerateTypesMatching=Context]
packageName    = com.schema

generated type = com.schema.Context

//feature module
packageName          = com.feature

generated Query type = com.feature.QueryA
generated X type     = com.schema.X -> Shouldn't this be com.feature.X?
I understand this will result in same class being duplicated under different namespaces but they would be fully decoupled the way fragments/queries are. I guess another downside of this approach is that it will require mapping between two types if they are used in a parent module.
m
I guess another downside of this approach is that it will require mapping between two types if they are used in a parent module.
Exactly! There are 2 kinds of symbols generated:
• executable (fragments/queries) in the com.feature package
• schema types in the com.schema package
If we generate the schema types in the com.featureN module, it means you can't share them anymore
thank you color 1
g
I have 2 feature modules A and B. Both define buttonFragment, and both modules have different package signatures: com.feature.A.fragment.ButtonFragment and com.feature.B.fragment.ButtonFragment. However, I’m getting a Duplicate Type error. Their namespaces are different, shouldn’t this pass the checks? Looking at the task, it is looking at the entry key, not the namespace. I think this is nice-to-have, to encourage devs to share fragments to avoid duplication, but it shouldn’t be mandatory imo. Idk how this works in v4
m
You’re right, fragments should be allowed to be duplicated in different branches.
v4 handles things a bit differently because it forces all schema types in the schema module
Fragments names are checked in each compilation unit against all upstream fragments
g
Ok cool. This is not a major concern for now as most fragment names will be unique and we are planning on having a module with shared fragment types, but I just wanted to give you feedback on this.
👍 1
💙 1
I have a legacy service {} block and a new service in the same module:
Copy code
service("old") {}
service("new") {}

// dependencies
apolloMetadata("com.new-service-metadata")
When I add apolloMetadata(…), the legacy block (“old”) complains because the metadata doesn’t contain a service called “old”. Is there a way to tell the plugin to use the apolloMetadata for the new service only, not both?
m
That part is a bit confusing at the moment. If you have multiple services, you’ll have to add the dependencies to the dedicated configurations:
Copy code
dependencies {
  add("apolloNewSchemaConsumer", project(":schema"))
  add("apolloNewConsumer", project(":schema"))
}
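Putting it together, a module with both services could look roughly like the sketch below (the configuration names follow the “apolloNew…” pattern above; the coordinates and package names are placeholders):
```kotlin
// Sketch: two services in one module, wiring published metadata into the "new" service only.
// Configuration names follow the pattern above; coordinates and package names are placeholders.
apollo {
  service("old") {
    packageName.set("com.example.old")
  }
  service("new") {
    packageName.set("com.example.new")
  }
}

dependencies {
  add("apolloNewSchemaConsumer", "com.example:schema-apollo:0.0.1")
  add("apolloNewConsumer", "com.example:schema-apollo:0.0.1")
}
```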
v4 handles that better too, if you can, I’d recommend upgrading
g
Thank you. We will upgrade, it is planned but not soon enough for this project. Thanks!
👍 1
Hey Martin, I have a question with regard to usedCoordinates. If I add a downstream module using usedCoordinates, does that mean all the types used in that downstream module will be generated in the upstream module? My assumption was that it would only generate conflicting types in the upstream module. What I’m observing is that all the used types of the module are pushed to the parent module (which is better for my use case, but I want to verify my understanding is correct).
b
Hey! Replying because Martin is out for a few days. This should generate all the types used in the feature module indeed, not just the conflicting ones (kind of an automatic version of
alwaysGenerateTypesMatching()
with all the types of the referenced module).
g
Thanks @bod!
m
Yea, I gave up on trying to be too smart there. In v4 all the types are always generated in the schema modules. I think it’s a case of Ockham’s razor where simple is better and easier to reason about even if it’s costing a few incremental rebuilds.
g
I have another thought/question on this topic. If we dedicate one repository as the schema/generated-types repository, any breaking change in this repo will impact everyone. So I’m looking into reducing this impact further by reducing schema sharing on the client side. On the server side we have a super schema made of multiple subgraphs, and I’m curious if we can mimic this behaviour on the client side. More like a vertical slice. This fits our use case because we rarely use anything from other backend subgraphs. Any shared types would be in an upstream module; client repos would only download their subschema. If we have these subgraphs on the server:
Copy code
- shared-types
- account
- profile
- transactions
On the client side:
• create a schema module that generates types for the shared-types subgraph
• the account feature module would depend on the schema module; it would also download the account subgraph and generate types from that subgraph, with a namespace unique to this module
• the profile and transactions modules would do the same, etc.
• if the account module needs to use types from profile, then it downloads the account subgraph + the profile subgraph
This is probably impossible to implement today because of the restriction of one schema module (unless we find a hacky way), but I want to hear your thoughts on whether this could be an option in the future.
m
Apologies for the late response! Interestingly, @Yang (hope I have the right handle 🤞, apologies otherwise!) was also looking at “client-side schema composition” a few days ago (#5951)
👍 1
I’m initially not thrilled by this idea as it potentially pulls a lot of complexity into the client (how do we compose? potentially pulling federation into the clients?) but since it has come up twice a few days apart, it’s definitely worth exploring
@goku back to your use case, I’m curious what breaking changes you have. Ideally there are no breaking changes without a long deprecation period?
g
#5951 sounds similar. Binary breaking changes we previously discussed. If we use input builders etc. we can downgrade them from binary breaking to source breaking, but it will still be a breaking change. If the account module’s subgraph has a breaking change, and the profile module updates the client schema to the latest, the profile team has to go and fix breaking changes coming from the account module. I might be using the wrong term, but Microfrontend architecture is the closest thing that comes to mind to explain this: vertical ownership and reduced impact from changes in other subgraphs.
m
Sorry I lost track there. I thought input builders would fix everything?
The problem is changing the order of input fields IIRC?
Copy code
# before
input UserInput {
  name: String
  email: String
}

# after
input UserInput {
  email: String
  name: String
}
In Kotlin:
Copy code
// Before
Input.Builder()
  .name("John Doe")
  .email("<mailto:john@doe.com|john@doe.com>")
  .build()

// After (same code works)
Input.Builder()
  .name("John Doe")
  .email("<mailto:john@doe.com|john@doe.com>")
  .build()
Not saying we shouldn’t discuss client-side schema composition but in the short term, input builders sound like the most pragmatic solution
g
Right, it doesn’t apply to this scenario. I’d better postpone this vertical schema split until we migrate to v4 🙂 I know there are also cases where devs instantiate Fragment constructors for their test data etc. After migrating to v4 I will check if the concerns are still valid.
👍 1
Hey Martin, I have a question with regard to usedCoordinates and apolloSchema in v3. TL;DR: how do I consume the schema from another repository, similar to how we are consuming metadata? Longer explanation… Here is my setup:
Copy code
:schema-module
    publishes: 
        - generated types (some of them, not all)
        - metadata


In another repository

:common-models:
   \ :featureA-models:
   \ :featureB-models:
I want to use usedCoordinates in the second repository so duplicate types from featureA and featureB can be generated inside common-models. I believe I need two things:
1. Add all modules to common-models with usedCoordinates
2. Add apolloSchema to the feature modules
No problem with the 1st step, but I’m stuck on the second step.
• Tried adding apolloSchema(schema-module-metadata) to the feature modules but no luck. I saw that the json file has schema in it so I thought this would work, but after checking the .module file I can see there isn’t an apollo-schema variant.
• Tried adding apolloSchema(schema-module-metadata) to common-models and apolloSchema(common-models) to the feature modules, but no luck either.
• I checked the generated schema.graphqls file in common-models but it is empty:
Could not find a schema, make sure to add dependencies to the 'apolloSchema' configuration
• And I cannot provide a local one since it says “upstream module already provided a schema”
Copy code
:common-models:
    usedCoordinates(:featureA-models)
    usedCoordinates(:featureB-models)

:featureA-models:
    apolloSchema(?)


:featureB-models:
    apolloSchema(?)
P.S. I can always use the alwaysGenerateTypesMatching property in common-models, but I’m avoiding it intentionally so we don’t have thousands of classes for auto-complete etc.
m
Hi 👋 2 general comments while I’m processing all of this:
• you should really try out v4. It contains a bunch of changes in this area and we’re approaching rc1; if we could have your feedback before committing to the API, that’d be awesome.
• usedCoordinates doesn’t work across repositories. That’s because there’s a “somewhat” circular dependency between modules when using usedCoordinates. It works in the same repo because it’s not a true circular dependency in terms of task dependencies, but it is a true circular dependency in terms of modules.
Re-reading everything, I think that 2nd point is the key there. If you wanted to get the used coordinates across repositories, you’d have to do something like this:
1. publish IR in featureA-models
2. consume that IR in common-models and publish metadata there
3. consume that metadata in featureA-models again
So it’s kind of a weird setup
We could make it work but it probably means the
.graphql
files need to live in a separate module from the Kotlin code. This is a line I’m not really ready to cross (but maybe we need to?):
Copy code
:common-schema:
    produces common-apolloSchema
:featureA-graphql:
    consumes common-apolloSchema
    produces featureA-apolloIr
:common-types:
    consumes featureA-apolloIr
    produces common-apolloMetadata
    produces com.example.schema.*.kt
:featureA-kotlin:
    consumes common-apolloSchema
    consumes common-apolloMetadata
    consumes featureA-apolloIr
    produces com.example.operation.*.kt
So basically create a module for each task that we have right now, or something like this... Doesn’t feel great
g
I will try to get on v4, but I’m blocked by the migration of another repository to v4 and that’s been taking time. Yeah, that doesn’t feel great. Seems like too much setup.
If you wanted to get the used coordinates across repositories
I’m confused about this one. Isn’t it still within the same repository? The types in schema-module are already generated and published to JFrog. In the 2nd repository, instead of generating types in the feature modules, I want to delegate type generation to common-models using usedCoordinates. Here’s the part I am not understanding: in the entire setup we can have only 1 schema module, and we do. But it is not clear how it is imported into the other repository. If I place a local copy of the schema and set it using schemaFile, I get a duplicate schema error, which means the schema from metadata.json is recognized. But when I use apolloSchema, that schema is ignored? Until we get to v4 I will probably keep everything in the same module or use the alwaysGenerateTypesMatching option, but beyond solving my problem I’m curious to understand how the schema from the metadata file is used.
m
Thanks for the fix on the migration guide!
g
No worries! I was making an attempt to kick off the migration in the parent project and noticed that.
❤️ 1
m
Which means the schema from
metadata.json
is recognized. But, when I use
apolloSchema
that schema is ignored?
Let me check,
metadata.json
should indeed contain the schema
g
It does. There are
schema
and
sdl
fields. But in the generated folder
schema.graphqls
file is still empty.
m
Got it working here. Looks like you need to add
-apollo
to the maven artifact id. I’m not 100% sure why TBH
Probably trying to avoid clobbering other Kotlin publications
It’s done here if you’re curious
g
Yes, that part is working fine for me. I have no problem publishing/consuming metadata. The part that is not working is the apolloSchema configuration. I can add apolloMetadata("com.example:schema-apollo:0.0.1") to my common-models, which is in a different repository. No problem with this. Then I add usedCoordinates(featureA) and usedCoordinates(featureB) to common-models. No build error so far, but usedCoordinates still doesn’t work since there is no schema specified in featureA and featureB. This is where I get stuck. I need to be able to do this in the feature modules:
apolloSchema("com.example:schema-apollo:0.0.1")
m
Ah gotcha. Yea, this is probably where the chicken/egg problem happens
There’s no publication for apolloSchema per se, it’s all in apolloMetadata
Which in turn depends on usedCoordinates, it’s all a big spaghetti meal
FWIW, the schema is not just the schema, it also contains at least the custom scalar mappings (because they need to be the same across all modules)
We’d need more artifacts
Copy code
:schema-module:apollo-schema:0.0.1
:feature:ir:0.0.1
:schema-module:metadata:0.0.1
:feature:metadata:0.0.1
g
Yes, at least the apollo-schema one, if we want to dictate that you can only have 1 schema across the setup.
m
The problem is it becomes really weird because a typical release process would be:
1. release apollo-schema in common
2. release apollo-ir in feature
3. go back to common and now release apollo-metadata
4. go back to feature and release the rest of it
I think there is some level of expectation in the community that a single module can be released all at once
Even though technically you could have different publications
It’s.... complicated 😅
module <-> publication <-> Gradle variants <-> where do we draw the lines
g
Yes, it gets really messy lol. Btw we don’t release the schema from common, we only publish schema-module. Anything inside the second repo is where the apps live; it is the leaf node. Is it difficult to create the schema.graphqls file using the schema from the metadata.json file? Wouldn’t this solve the problem?
m
> Btw we don’t release the schema from common. We only publish schema-module
Yup sorry, I meant schema-module instead of common
> Is it difficult to create the schema.graphqls file using the schema from the metadata.json file? Wouldn’t this solve the problem?
Somewhat?
(thinking out loud)
I don’t think it breaks the cycle though. You need :feature:ir to generate schema-module:metadata
Edited artifact list:
Copy code
:schema-module:schema:0.0.1
:feature:ir:0.0.1
:schema-module:metadata:0.0.1
:feature:metadata:0.0.1
In the artifacts above:
• schema is the GraphQL schema + enforced invariants (scalar mappings, maybe something else, I’d need to double check)
• ir is the GraphQL operations validated and transformed to an intermediate representation. This is where usedCoordinates are computed from
• metadata is the list of generated Kotlin models and their matching GraphQL identifier so that downstream codegen can reference them
Producing metadata is the part that actually runs kotlinpoet
g
> You need :feature:ir to generate schema-module:metadata
Maybe I’m missing something. But schema-module:metadata is already generated in a separate repository and shipped.
m
> But schema-module:metadata is already generated in a separate repository and shipped.
Then it’s generated without the downstream types
If you look inside metadata, there’s a list of ResolverEntry
Each ResolverEntry is a pair of a GraphQL name and a Kotlin name. This is so that downstream codegen knows that if it references a GraphQL type like SomeType, it can use the fully qualified name com.example.type.SomeType.kt
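As a rough illustration of that mapping (names and shape here are assumptions, not the actual apollo-kotlin metadata format):
```kotlin
// Illustrative only: the kind of GraphQL-name -> Kotlin-name pairs the metadata carries,
// so downstream codegen can reference already-generated types instead of regenerating them.
data class ResolverEntrySketch(
    val graphQlName: String,
    val kotlinFqName: String,
)

val sketchEntries = listOf(
    ResolverEntrySketch("SomeType", "com.example.type.SomeType"),
    ResolverEntrySketch("Context", "com.schema.Context"),
)
```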
g
Yes, exactly. schema-module has some types. (Too much context here, but in short it is an internal library we’ve been using in a monorepo; now we are splitting into multiple repos so we are reusing this module.) Downstream, feature modules will generate the missing types that don’t exist in schema-module. However, this is problematic when you have multiple feature modules in a single repo, so I want to generate the rest of the missing types in common-models.
m
> However, this is problematic when you have multiple feature modules in a single repo.
Can you elaborate on what is problematic? Multiple redefinitions?
Oh, common-models
Intermediate nodes
g
Yes. If both featureA and featureB have TypeC, we get duplicate type errors, so we want to push this type to common-models.
m
Yikes, I was hoping we could avoid that 😅
🥳 2
That’s complex. Also it means that sibling projects affect each other (which is also true if we generate everything in the schema module but a bit easier to wrap one’s head around)
g
Really? I thought this is how it was meant to be used. You need a direct path from the root node (schema) to the downstream leaf node, a strict hierarchy, and that’s what I’m having here. Every time there are sibling modules in the same repo, they have to nominate 1 module as the common/metadata module for shared types.
m
Yeah, I had this in mind initially but it created a bunch of complexity
Mostly having types moving around
Artifact list with common-models
Copy code
:schema-module:schema:0.0.1
:featureA:ir:0.0.1
:featureB:ir:0.0.1
:common-models:ir:0.0.1
:schema-module:ir:0.0.1
:schema-module:metadata:0.0.1
:common-models:metadata:0.0.1
:featureA:metadata:0.0.1
:featureB:metadata:0.0.1
g
I see. This would have been perfect for us. In our setup, we have a shared repo where this schema module lives. Then the plan is to have apps set up their own models in their repos. 2 mobile apps/repos can then consume the schema from the parent schema and generate their own types.
👀 1
m
Do you need the schema module to generate some types?
To put it otherwise, you could share a schema file without applying the Apollo Gradle plugin
g
If it was a greenfield setup, yes, this could work, but the schema module has the generated types from existing feature modules and we won’t be able to avoid duplicate class errors if we define the schema in multiple places
m
Could you remove the generated types from the schema module (breaking change) and then update each app?
Then each app’s common-models would become the root as far as the Apollo Gradle Plugin hierarchy is concerned
Or are there other consumers that you don’t control? (so breaking change is not an option maybe?)
g
We cannot do that; in the schema repo there will always be some shared components. I think I will go with generating full types in common-models for now. I was planning on doing that at the schema-module level anyway. The major concern was having too many classes for auto-complete and such, but that should be ok
m
Gotcha 👍. v4.0.0 will mandate that all types are at the “root”. I’ll open an issue to revisit that if it proves too much of a hassle
I hear the feature request, but I have also been bitten by types moving around. One problem was the meaning of
packageName
.
g
even when they are in separate repos? I’d better get to it asap then
m
Yes, I think so. Every module participating in a “service” needs to have a single “root” module where all the schema types are generated
But I think we haven’t solved the “circular dependency” problem yet?
in the schema repo there will always be some shared components.
and
generating full types in
common-models
for now
So you’re duplicating some types here? Maybe with different package names?
g
No, same package name. common-models depends on schema-module-apollo, so it will only generate what is not already in schema-module
m
Ah yes but you have distinct subtrees for app1 and app2
g
yes
m
Or rather:
schema-module
has a single descendant in each binary. This is the important part
If not there could always be the problem that we can’t move a conflict “up” because
schema-module
is published independently
g
Yes. Exactly. Always a linear path from upstream to downstream and only one descendant so far (not that we are planning to have more)
m
> v4.0.0 will mandate that all types are at the “root”. I’ll open an issue to revisit that if it proves too much of a hassle
Actually, can you give v4 a shot and open the issue if the limitation is too big for you? This will help prioritize
g
Yes, I am now testing getting the schema module upgraded. It will be a big effort but I will see if I can get anywhere soon.
thank you color 1
m
Back to your immediate problem, there might be a way, I’ll try to update the sample
g
Awesome, thank you! I will keep experimenting with this and share what I find
💙 1
m
Alright, I think I got it working by adding a specific publication for the schema: https://github.com/martinbonnin/test-apollo-schema/blob/main/schema/build.gradle.kts#L47
Actually you can just use the new publication in the
common-models
repo and then let that be propagated downstream (commit)
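Very roughly, the dedicated schema publication could look like the sketch below; the real setup is in the linked build.gradle.kts, and the publication name, artifactId, and path here are assumptions:
```kotlin
// Sketch only: publish the schema file as its own maven artifact so another repository
// can consume it. See the linked sample for the actual setup; names and paths are placeholders.
publishing {
  publications {
    create<MavenPublication>("apolloSchema") {
      artifactId = "schema-apollo-schema"
      artifact(file("src/main/graphql/schema.graphqls"))
    }
  }
}
```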
Interestingly, I think v3 also bubbles all the used types to the root if
usedCoordinates
is supplied so it’s not that different from v4
👍 1
But it allows you to “break” the “usedCoordinates” backlink if needed (ie you don’t use the
app1/common-models
usedCoordinates from
common-schema
)
tl;dr: 4.0 will require that you generate all the types in the schema module.
• If it’s a monorepo, then you can propagate usedCoordinates downstream
• If it’s not (like in your case), you’ll need to generate all used coordinates there (listOf(".*"))
If we revisit that, I think it’d make sense to only generate the “conflicts” at a given node, i.e. selfUsedCoordinates + conflictingDownStreamCoordinates. Please open an issue and we’ll look into it.
g
That’s great. I think this is exactly what I need! For v4, it sounds like this setup won’t work and we won’t be able to avoid generating the full schema, but I will test it out first. Btw, is there a way to debug the new plugin? I’m running the operationBased migration action; it shows that it is going through files but it gets stuck. It doesn’t say it is stuck, but it’s been running overnight.
m
we won’t be able to avoid generating the full schema, but I will test it out first.
Thanks! Let us know how that goes! I’m not opposed to introducing more complex options there but I’d like to quantify the wins and make a clear use case before doing so.
is there a way to debug the new plugin?
Summoning @bod for IntelliJ plugin debugging.
b
Btw, is there a way to debug the new plugin? I’m running the operationBased migration action; it shows that it is going through files but it gets stuck. It doesn’t say it is stuck, but it’s been running overnight.
Wow, sorry to hear that 😞 So you see the dialog with the progress bar that gets stuck in the middle, is that it? The first thing is to check if there has been a crash in the plugin: Help | Show Log in Finder, and see if you notice any stacktrace in idea.log. If not - there aren't a lot of logs in that area of the plugin I'm afraid. So the only way to debug would be to clone the repo and run the plugin from the IDE and add breakpoints 😅. It's a bit much but if you're motivated I can assist.
Actually first, I can also provide you with a version of the plugin with more logs.
g
It doesn’t get stuck, it is always moving and doing something, but it never ends. I let it run overnight but it still didn’t finish. I don’t think it should take that long? I manually made some of the changes and moved on, but when I get back to it I will check and let you know. Thanks @bod
b
Huh! That's surprising indeed. Well I have a version with more logs if you get back to it. Thanks!
g
I’m back to it and a version with more logs would be super helpful. This is what happens atm: it is always working but never ending. I checked the logs but it wasn’t clear if some of the errors were a result of this. So I will let it run for a while and see if I can spot a pattern in the logs.
Maybe this is the error. I see this being reported multiple times for the same ViewModel class:
Copy code
SEVERE - #c.i.p.i.s.PsiSearchHelperImpl - Error during processing of: SomeViewModel.kt
org.jetbrains.kotlin.utils.KotlinExceptionWithAttachments: Unsupported reference
	at org.jetbrains.kotlin.idea.search.usagesSearch.ExpressionsOfTypeProcessor$addClassToProcess$ProcessClassUsagesTask$perform$3.invoke(ExpressionsOfTypeProcessor.kt:245)
	at org.jetbrains.kotlin.idea.search.usagesSearch.ExpressionsOfTypeProcessor$addClassToProcess$ProcessClassUsagesTask$perform$3.invoke(ExpressionsOfTypeProcessor.kt:215)
	at org.jetbrains.kotlin.idea.search.usagesSearch.ExpressionsOfTypeProcessor$searchReferences$1$1.invoke(ExpressionsOfTypeProcessor.kt:951)
	at org.jetbrains.kotlin.idea.search.usagesSearch.ExpressionsOfTypeProcessor$searchReferences$1$1.invoke(ExpressionsOfTypeProcessor.kt:949)
	at com.intellij.openapi.application.ActionsKt.runReadAction$lambda$3(actions.kt:31)
...

at com.apollographql.ijplugin.refactoring.RefactoringKt.findReferences(Refactoring.kt:48)
	at com.apollographql.ijplugin.refactoring.migration.compattooperationbased.item.ReworkInlineFragmentFields.findUsages(ReworkInlineFragmentFields.kt:27)
	at com.apollographql.ijplugin.refactoring.migration.ApolloMigrationRefactoringProcessor.findUsages(ApolloMigrationRefactoringProcessor.kt:86)
	
...

SEVERE - #c.i.p.i.s.PsiSearchHelperImpl - Plugin to blame: Apollo GraphQL version: 4.0.0-beta.7
SEVERE - #c.i.p.i.s.PsiSearchHelperImpl - Last Action: CompatToOperationBasedCodegenMigrationAction
What stands out is:
• There are multiple errors I believe, but a couple of them are related to fragment usage, from findReferences(Refactoring.kt:48)
• A couple of them are caused by Kotlin:
Copy code
2024-07-03 14:14:26,726 [187287894] SEVERE - #c.i.p.i.s.PsiSearchHelperImpl - Plugin to blame: Kotlin version: 233.14808.21.2331.11842104-AS
2024-07-03 14:14:26,726 [187287894] SEVERE - #c.i.p.i.s.PsiSearchHelperImpl - Last Action: CompatToOperationBasedCodegenMigrationAction
• Another one is an exception with no details:
Copy code
java.lang.IllegalStateException: root
	at org.jetbrains.kotlin.name.FqNameUnsafe.shortName(FqNameUnsafe.java:138)
	at org.jetbrains.kotlin.name.FqName.shortName(FqName.java:88)
	at com.apollographql.ijplugin.navigation.GraphQLNavigationKt.isApolloOperationOrFragment(GraphQLNavigation.kt:92)
	at com.apollographql.ijplugin.navigation.GraphQLNavigationKt.isApolloOperationOrFragmentReference(GraphQLNavigation.kt:54)
	at com.apollographql.ijplugin.navigation.KotlinDefinitionMarkerProvider.collectNavigationMarkers(KotlinDefinitionMarkerProvider.kt:35)
For the first one, I checked the fragment: it is an inline fragment on an interface. Something like this, with multiple layers; a field inside Square would also have an interface/implementation. Maybe the plugin is missing a use case here:
Copy code
interface Shape
type Square implements Shape
type Circle implements Shape

fragment squareOrCircle on Shape {
   ... on Square {
       ...
   }
   ... on Circle {
       ...
   }
}
b
Thank you so much, this is super useful 🙏! I've built a version with more logs, and some exception catching, for you to try. You can install it from the plugin manager | ⚙️ | Install Plugin from Disk... (you should first uninstall the version you already have, and restart the IDE). Also, to get all the logs, you should go to Help | Diagnostic Tools | Debug Log Settings... and add
Apollo
in there
g
Sorry it took a while to get back to this. With this version of the plugin I’m getting a UI freeze, and these are the logs
b
Hi! Again, thanks a lot for trying it, this is very helpful 🙏. From what I see in the logs:
1. One of the searches returns a lot of elements, which may explain the freeze. Maybe some of the processing should be in a background thread and isn’t - I’ll investigate. By the way, I’d be interested to know your project’s size (lines of code)? We haven’t tested the migration in a lot of projects unfortunately, and never noticed a freeze before, but maybe we just never tested in a big enough project to trigger the issue.
2. There’s also an Unsupported reference crash while searching. The ‘good news’ is that the latest version of the plugin (4.0.0-rc.1) is resilient and will not crash. The bad news is that this may render the refactoring useless - it won’t find the necessary references, and your code won’t be migrated (it would still be interesting to try the RC version and see what happens). It’s hard to tell what triggers it. Looks like it can happen if a reference is found in something that’s neither Kotlin, Java, Groovy, Scala nor Clojure… If that rings a bell 😅
👍 1
g
What are the elements in this context? I can check what the references are. Idk lines of code, I will check, but we have about 6K types being generated (used types) in the metadata module.
b
> What are the elements in this context?
So it's looking for all Operation and Fragment data classes in your project, and found about 13K of them. Then it searches for all references to them - and that's where it crashes.
g
Ok, that sounds right then. I think the Unsupported reference one is for inline fragments, but I don’t see anything that stands out. Is it complicated to pull in the source code and run this locally? Maybe I can debug. Update: I can build it locally and see the output in the build/distributions folder, but is there a way to hot-load it into the IDE without going Plugins -> Install and restarting the IDE? And no Scala or anything, it is just an Android ViewModel class
So the latest main doesn’t crash, it continues processing. I put more logs around fragments and it looks for references of fragments. Can it then just migrate as much as it can and leave the failing classes? We can fix those failing classes manually. I will try and see how far it goes. At least I can see logs at the fragment level now
b
Thanks a lot for looking into it! Rather than (or in addition to) adding logs, you can actually run an IDE with the plugin itself from the IDE, so you can put breakpoints etc. You should see a run configuration named "Run IntelliJ plugin"; you can run that. By default this will run some version of IntelliJ though; to run Android Studio instead you can add this line to your ~/.gradle/gradle.properties file:
Copy code
apolloIntellijPlugin.ideDir=/Applications/Android Studio Stable.app/Contents
(assuming that's the right path to Android Studio on your machine / also assuming macOS). One little annoyance: AS will say the GraphQL plugin must be updated - you'll need to do that, quit, and run again.
g
I didn’t get that error and everything works fine, but I’m getting memory errors. AS defaulted to 2048MB; if I change it then I have to restart the IDE, and the value doesn’t persist. The VM Options button is disabled. I manually modified studio.vmoptions but I don’t think it is using that file at all. I tried this but no luck:
Copy code
runIde {
  maxHeapSize = "8g"
}
I think I found the problem: typealias. All 3 files that failed had a typealias in them. The YouTrack issue you commented on, and the linked tickets, also had typealias in them. Is it possible to ignore those files and continue processing?
image.png
b
Hi! Great find about the typealiases! This should help me to repro at least. I'm not sure if there's a way we can ignore these however. Since the exception happens during the reference search - it ends up breaking the whole search - so we don't have access to the results that did work. Maybe there are some clever hacks though...
Well, with typealiases I now manage to repro the "Unsupported reference" exception - but it looks like it's correctly caught by the IDE in my case (it doesn't reach the plugin's try/catch, and the reference is correctly returned and handled). I was wondering: which version of AS are you running? Also, can you maybe share what kind of typealiasing you have in your project? Maybe my repro case is a bit different.
g
I noticed the same for this error, but execution never completes. I was actually testing with a small subset of types to see if it completes, but I didn’t have much time to continue the work. I also want to swap the project-scope logic with module scope and see if a smaller set of classes would be easier and faster to complete. Now that I have the debugger working I can investigate more
b
👍 Well don't hesitate if you have more info.
👍 1
g
I think the main issue is the UI freeze. I changed the flatMap to flatMapIndexed. There are 680 inline fragment properties in total; after 4-5 it freezes. I put the search call in a coroutine and so far there are no freezes, but it is slow. For 60 properties it took 15 mins, avg 15 seconds per property. For 680 properties this would take about 3 hours. I will let it run and see
b
Wow I'm wondering why it's so slow. Maybe the search scope is too broad (there was an issue previously where it was looking at files outside the project). Also, when I checked, this was called from a background thread, so I'm wondering why it causes a freeze.
🤔 1
g
As I typed this it went from the 60th ref to the 120th in 2 mins. It’s weird. Sometimes it takes 1 second, sometimes it is up to 30 seconds. I guess I can modify the parts and work with this. As long as it doesn’t freeze I should be good. In the meantime, if you guys come up with any update let me know
👍 1
b
hmm weird. Maybe it's a case of not enough memory again (GC being triggered can slow down everything) 🤔
g
Possibly. I tried everything to increase memory: jvmArgs, VM Options flags, runIde settings, no luck. But the UI freeze was also happening when I tested the RC build, which had 8g of memory
b
all right - probably not that then
g
The refactoring action might be causing this. There are similar bug reports from 15 years ago, with new comments from last year. I thought the dialog shown was controlled by us, but it looks like it is the default dialog from the refactoring, which might explain it. https://youtrack.jetbrains.com/issue/IDEA-24296/Migrate-type-refactor-locked-cpu-at-100
b
😅 wow
😆 1
g
I played around with this a bit more just to satisfy my curiosity lol. A couple of learnings/observations:
• I applied a small subset of fragments instead of the full 600. I tried with 200, 100, 50. It always reaches a number and then it slows down.
  ◦ It succeeds if you make it small enough, but it’s hard to tell what the sweet spot is as it always varies.
• Tried looping through the modules and searching in moduleScope (sketched below). This works well for a few modules, but when you loop through all modules (320 modules including test/unit test) you run into the same issue.
We will either do this migration manually, or I will split the modules into 5-6 groups and run the migration separately.
b
Thanks a lot for the follow-up. Well, it’s sad that there doesn’t seem to be a solution to make it work. It looks like some kind of memory leak, but it’s hard to tell.